DTE AICCOMAS 2025

Coupled Lie-Poisson Neural Networks (CLPNets): Data-Based Computing of Coupled Hamiltonian Systems

  • Putkaradze, Vakhtang (University of Alberta)


Physics-Informed Neural Networks (PINNs) have received much attention recently due to their potential for high-performance computation of complex physical systems. The idea of PINNs is to enforce the governing equations, together with boundary and initial conditions, through the loss function of a neural network. PINNs combine the efficiency of data-based prediction with the accuracy and insight provided by physical models. However, applying these methods to predict the long-term evolution of systems with little friction, such as many systems encountered in space exploration, oceanography/climate, and other fields, requires extra care: errors tend to accumulate, and the results may quickly become unreliable. We provide a solution to the problem of data-based computation of Hamiltonian systems using symmetry methods, paying special attention to systems arising from the discretization of continuum mechanics. For example, for simulation purposes, a continuum elastic rod can be discretized into coupled elements whose dynamics depend on the relative position and orientation of neighboring elements. For data-based computing of such systems, we design Coupled Lie-Poisson Neural Networks (CLPNets). We treat the Poisson bracket structure as primary and require it to be satisfied exactly, whereas the Hamiltonian, known only from the physics, may be satisfied approximately. By design, the method preserves all special integrals of the bracket (Casimirs) to machine precision. We present applications of CLPNets to several particular cases, such as coupled rigid bodies and elastically connected elements. CLPNets show surprising robustness as the dimensionality of the system increases, enabling the computation of dynamics in up to 18 dimensions using networks with only one to two hundred parameters and only one to two thousand data points for learning.
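
As a concrete illustration of the bracket-first idea, the Python/NumPy sketch below shows a Casimir-preserving Lie-Poisson update for a single free rigid body. It is not the CLPNet architecture itself (which is not detailed in this abstract): the function grad_H merely stands in for the gradient of a learned Hamiltonian, and all names, step sizes, and inertia values are illustrative assumptions. The point is that because each update acts on the momentum by a rotation, the Casimir |Pi| is preserved to round-off even if the Hamiltonian is only approximate.

    # Minimal sketch, assuming a free rigid body on so(3)*; not the authors' CLPNet code.
    import numpy as np

    def hat(w):
        """Map a 3-vector to its skew-symmetric matrix, so hat(w) @ v = w x v."""
        return np.array([[0.0, -w[2], w[1]],
                         [w[2], 0.0, -w[0]],
                         [-w[1], w[0], 0.0]])

    def rodrigues(w):
        """Rotation matrix exp(hat(w)) via Rodrigues' formula."""
        theta = np.linalg.norm(w)
        if theta < 1e-12:
            return np.eye(3)
        K = hat(w / theta)
        return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

    def grad_H(Pi, inv_inertia=np.array([1.0, 2.0, 3.0])):
        """Stand-in for a learned Hamiltonian gradient dH/dPi.
        Here it is the exact free rigid body, H = 0.5 * Pi . (I^{-1} Pi)."""
        return inv_inertia * Pi

    def lie_poisson_step(Pi, h):
        """One step Pi_{n+1} = exp(-h hat(grad_H(Pi_n))) @ Pi_n.
        The update rotates Pi, so the Casimir |Pi| is preserved exactly,
        regardless of errors in grad_H."""
        Omega = grad_H(Pi)
        return rodrigues(-h * Omega) @ Pi

    Pi = np.array([1.0, 0.2, -0.5])
    for _ in range(1000):
        Pi = lie_poisson_step(Pi, h=0.01)
    print(np.linalg.norm(Pi))  # Casimir |Pi| unchanged up to machine precision

In a data-based setting, grad_H would be replaced by the gradient of a small trained network, and coupled systems (e.g., chains of rigid bodies or elastic elements) would compose such structure-preserving updates on the corresponding coupled Lie-Poisson phase space.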